In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, a single peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we divide the predicate distribution into sub-distributions based on predicate frequency, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during training. The complementary predicate knowledge of these peers is then ensembled using a consensus voting strategy, which simulates a civilized voting process in society that emphasizes the majority opinion and diminishes the minority one. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We establish a new state-of-the-art (SOTA) on the SGCls task by achieving a mean of \textbf{31.6}.
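To make the divide-and-ensemble idea concrete, here is a minimal sketch of the two ingredients; the split fractions, peer count, and hard-voting rule are illustrative assumptions rather than the paper's exact design:

```python
import numpy as np

def split_by_frequency(pred_counts, head_frac=0.2, tail_frac=0.4):
    """Split predicate classes into head/body/tail by sample frequency.

    The fractions are arbitrary illustration values, not the paper's.
    """
    order = np.argsort(pred_counts)[::-1]        # most frequent first
    n = len(order)
    n_head = max(1, int(n * head_frac))
    n_tail = max(1, int(n * tail_frac))
    return order[:n_head], order[n_head:n - n_tail], order[n - n_tail:]

def consensus_vote(peer_logits):
    """Ensemble peers by majority vote over their argmax predictions."""
    votes = np.argmax(peer_logits, axis=-1)      # one vote per peer
    return np.bincount(votes, minlength=peer_logits.shape[-1]).argmax()

# toy example: 3 peers scoring 5 predicate classes for one relation
peers = np.random.randn(3, 5)
print(consensus_vote(peers))
```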
Audio-visual scene understanding is a challenging problem due to the unstructured spatial-temporal relations that exist in audio signals, and the spatial layouts of different objects and various texture patterns in visual images. Recently, many studies have focused on abstracting features with convolutional neural networks, while the explicit learning of semantically relevant frames of sound signals and visual images has been overlooked. To this end, we present an end-to-end framework, namely the attentional graph convolutional network (AGCN), for structure-aware audio-visual scene representation. First, the sound spectrogram and the input image are each processed by a backbone network for feature extraction. Then, to build multi-scale hierarchical information from the input features, we utilize an attention fusion mechanism to aggregate features from multiple layers of the backbone network. Notably, to well represent the salient regions and contextual information of the audio-visual inputs, a salient acoustic graph (SAG), a contextual acoustic graph (CAG), a salient visual graph (SVG), and a contextual visual graph (CVG) are constructed for the audio-visual scene representation. Finally, the constructed graphs pass through a graph convolutional network for structure-aware audio-visual scene recognition. Extensive experimental results on audio, visual, and audio-visual scene recognition datasets show that the AGCN achieves promising results. Visualizations of the graphs on spectrograms and images show that the proposed CAG/SAG and CVG/SVG indeed focus on salient and semantically relevant regions.
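As a rough illustration of the pipeline, here is a sketch of multi-layer attention fusion followed by one graph-convolution step; the attention scoring and graph construction below are generic stand-ins, not the AGCN's actual layers:

```python
import numpy as np

def attention_fuse(feats):
    """Softmax-weighted fusion of same-shape features from multiple layers.

    The scalar scoring function is a placeholder for a learned attention.
    """
    scores = np.array([f.mean() for f in feats])
    w = np.exp(scores) / np.exp(scores).sum()
    return sum(wi * fi for wi, fi in zip(w, feats))

def gcn_layer(A, H, W):
    """One graph convolution: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))     # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)

# toy graph: 4 nodes (e.g. salient regions) with 8-d features
A = (np.random.rand(4, 4) > 0.5).astype(float)
H = np.random.randn(4, 8)
print(gcn_layer(A, H, np.random.randn(8, 8)).shape)   # (4, 8)
```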
Coverage path planning is a major application for mobile robots, which requires robots to move along a planned path to cover the entire map. For large-scale tasks, coverage path planning benefits greatly from multiple robots. In this paper, we describe Turn-minimizing Multirobot Spanning Tree Coverage Star (TMSTC*), an improved multirobot coverage path planning (mCPP) algorithm based on MSTC*. Our algorithm partitions the map into a minimum number of bricks that serve as the tree's branches, thereby transforming the problem into finding the maximum independent set of a bipartite graph. We then connect the bricks with a greedy strategy to form a tree, aiming to reduce the number of turns in the corresponding circumnavigating coverage path. Our experimental results show that our approach enables multiple robots to make fewer turns and thus complete terrain coverage tasks faster than other popular algorithms.
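TMSTC* itself builds bricks and spanning trees; the sketch below only illustrates the quantity it minimizes, the number of turns along a grid coverage path:

```python
def count_turns(path):
    """Count direction changes along a grid path of (x, y) cells."""
    turns = 0
    for a, b, c in zip(path, path[1:], path[2:]):
        d1 = (b[0] - a[0], b[1] - a[1])   # incoming direction
        d2 = (c[0] - b[0], c[1] - b[1])   # outgoing direction
        if d1 != d2:
            turns += 1
    return turns

# a boustrophedon sweep over a 3x3 grid makes 4 turns
sweep = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1),
         (0, 1), (0, 2), (1, 2), (2, 2)]
print(count_turns(sweep))  # 4
```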
It is well known that it is difficult to have a reliable and robust framework to bridge multi-agent deep reinforcement learning algorithms and practical multi-robot applications. To fill this gap, we propose and build an open-source framework for multi-robot systems called MultiRoboLearn. This framework provides a unified setup for both simulation and real-world applications. It aims to offer standard, easy-to-use simulated scenarios that can also be easily deployed to real-world multi-robot environments. In addition, the framework provides researchers with a benchmark system for comparing the performance of different reinforcement learning algorithms. We demonstrate the generality, scalability, and capability of the framework with different types of multi-agent deep reinforcement learning algorithms in both discrete and continuous action spaces.
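A minimal sketch of what such a unified sim/real interface might look like; the class and method names are assumptions for illustration, not MultiRoboLearn's actual API:

```python
from abc import ABC, abstractmethod

class MultiRobotEnv(ABC):
    """One interface backed by either a simulator or real robots."""

    @abstractmethod
    def reset(self) -> dict:
        """Return {robot_id: observation} for all robots."""

    @abstractmethod
    def step(self, actions: dict) -> tuple:
        """Take {robot_id: action}; return (obs, rewards, dones, info) dicts."""

def run_episode(env: MultiRobotEnv, policies: dict, max_steps: int = 200):
    """Algorithm code stays the same regardless of the backend."""
    obs = env.reset()
    for _ in range(max_steps):
        actions = {rid: policies[rid](o) for rid, o in obs.items()}
        obs, rewards, dones, _ = env.step(actions)
        if all(dones.values()):
            break
```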
We study a practical yet under-explored problem: how can a drone perceive its environment from viewpoints at different flight heights? Unlike autonomous driving, where perception always takes place from a ground viewpoint, a flying drone may flexibly change its flight height for specific tasks, which calls for viewpoint-invariant perception. To reduce the effort of annotating flying data, we consider a ground-to-aerial knowledge distillation approach that uses only labeled data from the ground viewpoint and unlabeled data from flying viewpoints. To this end, we propose a progressive semi-supervised learning framework with four core components: a dense viewpoint sampling strategy that divides the range of vertical flight heights into a set of uniformly distributed small segments, with data sampled from the viewpoint at each height; nearest-neighbor pseudo-labeling, which annotates the nearest-neighbor viewpoint with a model trained on the previous viewpoint; MixView, which generates augmented images between different viewpoints to alleviate viewpoint differences; and a progressive distillation strategy that learns step by step until reaching the maximum flight height. We collect a synthetic dataset and a real-world dataset, and conduct extensive experiments showing that our method yields promising results across different flight heights.
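A schematic of the progressive pseudo-labeling loop under stated assumptions (`train_fn`, `model.predict`, and the blend function are illustrative placeholders, not the paper's code):

```python
def progressive_pseudo_label(model, labeled_ground, unlabeled_by_altitude, train_fn):
    """Train at ground level, then propagate labels upward one altitude at a time.

    `unlabeled_by_altitude` maps altitude -> unlabeled samples;
    `train_fn(model, data)` fine-tunes and returns the model (assumed helper).
    """
    model = train_fn(model, labeled_ground)
    for altitude in sorted(unlabeled_by_altitude):        # lowest first
        batch = unlabeled_by_altitude[altitude]
        pseudo = [(x, model.predict(x)) for x in batch]   # labels from previous viewpoint
        model = train_fn(model, pseudo)                   # distill to the next altitude
    return model

def mix_view(img_low, img_high, lam=0.5):
    """MixView-style blend of two viewpoints to soften the altitude gap."""
    return lam * img_low + (1 - lam) * img_high
```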
We study data-free knowledge distillation (KD) for monocular depth estimation (MDE), which learns a lightweight network for real-world depth perception by compressing a well-trained expert model under a teacher-student framework while lacking training data from the target domain. Owing to the essential difference between dense regression and image recognition, previous data-free KD methods are not applicable to MDE. To strengthen real-world applicability, in this paper we seek to apply KD with out-of-distribution simulated images. The major challenges are: i) lacking prior information about the object distribution of the original training data; and ii) the domain shift between the real world and the simulation. To cope with the first difficulty, we apply object-wise image mixing to generate new training samples that maximally cover the distribution patterns of objects in the target domain. To tackle the second difficulty, we propose to utilize an efficiently learned transformation network to fit the simulated data to the feature distribution of the teacher model. We evaluate the proposed approach on various depth estimation models and two different datasets. As a result, our method outperforms the baseline KD by a good margin and even achieves slightly better performance with as few as $1/6$ of the images, demonstrating a clear superiority.
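A sketch of one distillation step built on these ideas; the module names and the L1 loss choice are assumptions for illustration, not the released implementation:

```python
import torch
import torch.nn.functional as F

def datafree_kd_step(teacher, student, transform_net, sim_batch, optimizer):
    """One KD step on simulated images adapted toward the teacher's domain."""
    adapted = transform_net(sim_batch)           # bridge the sim-to-real shift
    with torch.no_grad():
        target_depth = teacher(adapted)          # teacher pseudo ground truth
    pred_depth = student(adapted)
    loss = F.l1_loss(pred_depth, target_depth)   # dense-regression KD loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```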
In recent years, scene graph generation has achieved great progress. However, the intrinsic long-tailed distribution of predicate classes remains a challenging problem. Almost all existing scene graph generation (SGG) methods follow the same framework, in which they use a similar backbone network for object detection and a customized network for scene graph generation. These methods often design sophisticated context encoders to extract the intrinsic relevance of scene context w.r.t. the intrinsic predicates, along with complex networks to improve the model's learning capability under the highly imbalanced data distribution. To address the unbiased SGG problem, we propose a simple yet effective method called Context-Aware Mixture of Experts (CAME) to improve model diversity and mitigate biased SGG without complicated design. Specifically, we propose to use a mixture of experts to remedy the heavily long-tailed distribution of predicate classes, which is applicable to the majority of unbiased scene graph generators. The mixture of relation experts tackles the long-tailed distribution of predicates in a divide-and-ensemble manner. As a result, biased SGG is mitigated and the model tends to make more balanced predicate predictions. However, experts with identical weights are not sufficient to discriminate between different levels of predicate distributions. Hence, we build a context-aware encoder to help the network dynamically exploit rich scene features and further improve model diversity. By leveraging the contextual information of the image, the importance of each expert w.r.t. the context is dynamically assigned. We have conducted extensive experiments on three tasks of the Visual Genome dataset, showing superior performance over previous methods.
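A minimal sketch of context-gated expert ensembling in the spirit described; the dimensions and the linear gate are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ContextGatedMoE(nn.Module):
    """Weight relation experts dynamically from scene-context features."""

    def __init__(self, ctx_dim=512, num_classes=50, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(ctx_dim, num_classes) for _ in range(num_experts))
        self.gate = nn.Linear(ctx_dim, num_experts)

    def forward(self, ctx):                       # ctx: (B, ctx_dim) scene context
        w = torch.softmax(self.gate(ctx), dim=-1)                     # (B, E)
        logits = torch.stack([e(ctx) for e in self.experts], dim=1)   # (B, E, C)
        return (w.unsqueeze(-1) * logits).sum(dim=1)                  # weighted ensemble

print(ContextGatedMoE()(torch.randn(2, 512)).shape)  # torch.Size([2, 50])
```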
For large-scale tasks, a multi-robot system (MRS) can effectively improve efficiency by leveraging each robot's different capabilities, mobility, and functionality. In this paper, we focus on the multi-robot coverage path planning (MCPP) problem for large-scale planar areas with random dynamic interference, in environments where robot resources are limited. We introduce a worker-station MRS consisting of multiple workers with limited resources for actual work, and a station carrying sufficient resources for replenishment. We aim to solve the MCPP problem of the worker-station MRS by formulating it as a fully cooperative multi-agent reinforcement learning problem. We then propose an end-to-end decentralized online planning method that simultaneously solves coverage planning for the workers and rendezvous planning for the station. Our method manages to reduce the influence of random dynamic interference on planning, while the robots can avoid colliding with it. We conduct simulation and real-robot experiments, and the comparison results show that our method has competitive performance in solving the MCPP problem in terms of the task completion time metric.
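For intuition only, here is a hand-written baseline that the learned decentralized policy would replace, capturing the worker-station resource constraint; all names and the threshold are assumed:

```python
def worker_policy(pos, resource, station_pos, frontier, threshold=0.2):
    """Cover uncovered cells until resources run low, then rendezvous.

    A rule-based stand-in; the paper learns this behavior end to end.
    """
    if resource < threshold:
        return ("goto", station_pos)     # request replenishment at the station
    if frontier:
        return ("cover", frontier[0])    # move to the next uncovered cell
    return ("idle", pos)
```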
Depth completion aims to predict dense pixel-wise depth from an extremely sparse map captured by a depth sensor, e.g., LiDAR. It plays an essential role in various applications such as autonomous driving, 3D reconstruction, augmented reality, and robot navigation. Deep learning based solutions have demonstrated state-of-the-art success on this task. In this paper, we provide, for the first time, a comprehensive literature review that helps readers better grasp the research trends and clearly understand the current advances. We investigate the related studies from the design aspects of network architectures, loss functions, benchmark datasets, and learning strategies, with a proposal of a novel taxonomy that categorizes existing methods. Besides, we present a quantitative comparison of model performance on three widely used benchmarks, including indoor and outdoor datasets. Finally, we discuss the challenges of prior works and provide readers with some insights for future research directions.
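For readers new to the task, a common way benchmarks simulate the sparse input from dense ground truth is by randomly sampling valid depth pixels; the sample count below is an arbitrary choice:

```python
import numpy as np

def sparsify_depth(dense_depth, num_samples=500, seed=0):
    """Keep a random subset of valid pixels, zeroing the rest."""
    rng = np.random.default_rng(seed)
    valid = np.argwhere(dense_depth > 0)          # (k, 2) valid pixel coords
    picks = valid[rng.choice(len(valid),
                             size=min(num_samples, len(valid)),
                             replace=False)]
    sparse = np.zeros_like(dense_depth)
    sparse[picks[:, 0], picks[:, 1]] = dense_depth[picks[:, 0], picks[:, 1]]
    return sparse
```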
In this paper, we propose a unified whole-body control framework for velocity-controlled mobile collaborative robots, which can distribute task motion between the arm and the mobile base according to specific task requirements by adjusting weighting factors. Our framework focuses on addressing two challenging issues in whole-body coordination: 1) the different dynamic characteristics of the mobile base and the arm; 2) avoiding violations of both safety and configuration constraints. In addition, our controller incorporates Coupling Dynamic Movement Primitives to enable the capabilities essential for collaboration and interaction applications, such as obstacle avoidance, human teaching, and compliance control. Based on these, we design an adaptive motion mode for intuitive physical human-robot interaction by adjusting the weighting factors. The proposed controller has a closed-form solution and is thus quite computationally efficient. Several typical experiments carried out on a real mobile collaborative robot validate the effectiveness of the proposed controller.
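For intuition, a standard weighted least-squares redundancy resolution captures how weighting factors shift task motion between the base and the arm; the paper's closed-form controller additionally handles constraints and DMP coupling, so this is only a sketch of the underlying idea:

```python
import numpy as np

def weighted_task_distribution(J, xdot, weights):
    """Solve min ||qdot||_W s.t. J qdot = xdot via the weighted pseudoinverse:
    qdot = W^-1 J^T (J W^-1 J^T)^-1 xdot.
    A larger weight on a joint penalizes its motion, shifting the task
    to the other joints (e.g., from the base to the arm)."""
    W_inv = np.diag(1.0 / np.asarray(weights, dtype=float))
    JWJt = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JWJt, xdot)

# toy: 2-D task, base (2 dof) + arm (3 dof); heavy base weights favor the arm
J = np.random.randn(2, 5)
qdot = weighted_task_distribution(J, np.array([0.1, 0.0]), [10, 10, 1, 1, 1])
print(qdot.round(3))
```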